

Search for: All records

Creators/Authors contains: "Pougkakiotis, Spyridon"


  1. Abstract: In this paper we present an efficient active-set method for the solution of convex quadratic programming problems with general piecewise-linear terms in the objective, with applications to sparse approximations and risk minimization. The algorithm is derived by combining a proximal method of multipliers (PMM) with a standard semismooth Newton method (SSN), and is shown to be globally convergent under minimal assumptions. Further local linear (and potentially superlinear) convergence is shown under standard additional conditions. The major computational bottleneck of the proposed approach arises from the solution of the associated SSN linear systems. These are solved using a Krylov-subspace method, accelerated by certain novel general-purpose preconditioners which are shown to be optimal with respect to the proximal penalty parameters. The preconditioners are easy to store and invert, since they exploit the structure of the nonsmooth terms appearing in the problem's objective to significantly reduce their memory requirements. We showcase the efficiency, robustness, and scalability of the proposed solver on a variety of problems arising in risk-averse portfolio selection, $L^1$-regularized partial differential equation constrained optimization, quantile regression, and binary classification via linear support vector machines. We provide computational evidence, on real-world datasets, to demonstrate the ability of the solver to efficiently and competitively handle a diverse set of medium- and large-scale optimization instances.
    Free, publicly-accessible full text available August 1, 2026
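The abstract above couples a proximal method of multipliers with a semismooth Newton solve; that machinery is beyond a short sketch, but the basic idea of handling a piecewise-linear ($\ell_1$) term through its proximal operator can be illustrated with plain proximal gradient (ISTA) on a toy quadratic. This is a hedged stand-in for intuition only, not the paper's PMM-SSN algorithm; the problem data `Q`, `b`, and `lam` below are invented for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Q, b, lam, steps=500):
    # Proximal gradient (ISTA) for: min 0.5 x'Qx - b'x + lam*||x||_1.
    # Step size 1/L, with L the largest eigenvalue of Q (the gradient's
    # Lipschitz constant for this quadratic).
    L = np.linalg.eigvalsh(Q).max()
    x = np.zeros_like(b)
    for _ in range(steps):
        grad = Q @ x - b            # gradient of the smooth quadratic part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy instance: Q diagonal, so the solution separates per coordinate.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([3.0, 0.5])
x = ista(Q, b, lam=1.0)
# For this diagonal Q the minimizer is soft_threshold(b, 1) / 2,
# i.e. x = [1, 0]: the small coordinate is driven exactly to zero.
```

The second coordinate landing exactly at zero is the sparsity-inducing behavior of the piecewise-linear term that the paper's active-set method exploits at scale.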
  2. We establish strong duality relations for functional two-step compositional risk-constrained learning problems with multiple nonconvex loss functions and/or learning constraints, regardless of nonconvexity and under a minimal set of technical assumptions. Our results in particular imply zero duality gaps within the class of problems under study, both extending and improving on the state of the art in (risk-neutral) constrained learning. More specifically, we consider risk objectives/constraints which involve real-valued convex and positively homogeneous risk measures admitting dual representations with bounded risk envelopes, generalizing expectations and including popular examples, such as the conditional value-at-risk (CVaR), the mean-absolute deviation (MAD), and more generally all real-valued coherent risk measures on integrable losses as special cases. Our results are based on recent advances in risk-constrained nonconvex programming in infinite dimensions, which rely on a remarkable new application of J. J. Uhl's convexity theorem, an extension of A. A. Lyapunov's convexity theorem to general, infinite-dimensional Banach spaces. By specializing to the risk-neutral setting, we demonstrate, for the first time, that constrained classification and regression can be treated under a unifying lens, while dispensing with certain restrictive assumptions enforced in the current literature, yielding a new state-of-the-art strong duality framework for nonconvex constrained learning.
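As a concrete instance of the dual representations with bounded risk envelopes mentioned above, the standard risk-envelope form of CVaR at level $\alpha \in (0,1)$ is the textbook identity (quoted here for context, not taken from the abstract):

```latex
% Dual (risk-envelope) representation of CVaR: the supremum of reweighted
% expectations over densities bounded by 1/(1-alpha).
\mathrm{CVaR}_\alpha(Z)
  = \sup\Bigl\{\, \mathbb{E}[\zeta Z] \;:\;
      0 \le \zeta \le \tfrac{1}{1-\alpha},\ \mathbb{E}[\zeta] = 1 \,\Bigr\}
```

The bounded, convex set of admissible densities $\zeta$ is the "risk envelope"; taking the bound $1/(1-\alpha) \to \infty$ recovers the worst-case (essential supremum) risk, while $\alpha \to 0$ recovers the plain expectation.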
  3. Electronically tunable metasurfaces, or Intelligent Reflecting Surfaces (IRSs), are a popular technology for achieving high spectral efficiency in modern wireless systems by shaping channels using a multitude of tunable passive reflecting elements. Capitalizing on key practical limitations of IRS-aided beamforming pertaining to system modeling and channel sensing/estimation, we propose a novel, fully data-driven Zeroth-order Stochastic Gradient Ascent (ZoSGA) algorithm for general two-stage (i.e., short/long-term), fully-passive IRS-aided stochastic utility maximization. ZoSGA learns long-term optimal IRS beamformers jointly with short-term optimal precoders (e.g., WMMSE-based) via minimal zeroth-order reinforcement and in a strictly model-free fashion, relying solely on the effective compound channels observed at the terminals, while being independent of channel models or network/IRS configurations. Another remarkable feature of ZoSGA is being amenable to analysis, enabling us to establish a state-of-the-art (SOTA) convergence rate of the order of $O(\sqrt{S}\,\varepsilon^{-4})$ under minimal assumptions, where $S$ is the total number of IRS elements, and $\varepsilon$ is a desired suboptimality target. Our numerical results on a standard MISO downlink IRS-aided sum-rate maximization setting establish SOTA empirical behavior of ZoSGA as well, consistently and substantially outperforming standard fully model-based baselines. Lastly, we demonstrate that ZoSGA can in fact operate in the field, by directly optimizing the capacitances of a varactor-based electromagnetic IRS model (unknown to ZoSGA) on a multiple user/IRS, link-dense network setting, with essentially no computational overheads or performance degradation.
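ZoSGA itself is specified in the paper, but its core model-free ingredient, estimating a gradient from function evaluations alone, can be sketched with a generic two-point zeroth-order estimator driving gradient ascent on a toy concave utility. This is an illustrative sketch of the zeroth-order technique, not ZoSGA; the utility, step size, and smoothing radius below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, theta, mu=1e-3):
    # Two-point zeroth-order gradient estimate: probe f along a random
    # Gaussian direction u and use a symmetric finite difference. Only
    # function values are needed (no channel model / no analytic gradient).
    u = rng.standard_normal(theta.shape)
    return (f(theta + mu * u) - f(theta - mu * u)) / (2.0 * mu) * u

def zo_ascent(f, theta, step=0.05, iters=2000):
    # Plain stochastic gradient ascent driven by zeroth-order estimates.
    for _ in range(iters):
        theta = theta + step * zo_grad(f, theta)
    return theta

# Toy concave utility, maximized at theta* = [1, -2] (stand-in for a
# sum-rate objective observed only through evaluations).
theta_star = np.array([1.0, -2.0])
f = lambda th: -np.sum((th - theta_star) ** 2)
theta = zo_ascent(f, np.zeros(2))
# theta converges to a neighborhood of theta* = [1, -2].
```

For a quadratic utility the symmetric difference is exact along the probe direction, so the only stochasticity comes from the random directions; in general the smoothing radius `mu` trades bias against numerical noise.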